

Pentagon yet to ask NORTHCOM to intervene against New Jersey drones

FOX News

Rep. Jeff Van Drew, R-N.J., addresses concerns over mystery drones flying over multiple New Jersey counties and details what sources have told him about their origins. The Pentagon has not yet asked U.S. Northern Command to intervene amid reports of mysterious drones witnessed flying over New Jersey, according to a military spokesperson. The large drone sightings have caused concern and confusion as dozens have been reported and officials are at a loss to explain where they come from. Northern Command confirmed some of the drones have been sighted near U.S. military installations. "We are aware and monitoring the reports of unauthorized drone flights in the vicinity of military installations in New Jersey to include Picatinny Arsenal and Naval Weapons Station Earle, and we refer you to those installations for information on any efforts they may be conducting to ensure the safety and security of their personnel and operations," a U.S. Northern Command spokesperson told Fox News Digital.


Mysterious drones are 'changing time' on clocks in New Jersey as locals fear they're being targeted by UFOs

Daily Mail - Science & tech

As waves of loud, car-sized mystery drones continue to buzz over New Jersey, one family reported that the craft changed the time on their car's clock. The family of Morris County locals said they were following one of these seemingly terrestrial UFOs in their vehicle, only to experience the odd effect on their car's electronics as the unexplained craft 'hovered above them.' 'The clock in their car changed time,' according to one Fox News reporter who spoke to the unnamed family. 'They say the clock went back to normal after they drove off.' While local law enforcement in Morris County has issued a statement asserting that 'there is no known threat to public safety' at this time -- the Federal Aviation Administration (FAA) has issued a ban on drone flights over sensitive areas in the state.


PEDANTS (Precise Evaluations of Diverse Answer Nominee Text for Skinflints): Efficient Evaluation Analysis and Benchmarking for Open-Domain Question Answering

Li, Zongxia, Mondal, Ishani, Liang, Yijun, Nghiem, Huy, Boyd-Graber, Jordan Lee

arXiv.org Artificial Intelligence

Question answering (QA) can only make progress if we know whether an answer is correct, but for many of the most challenging and interesting QA examples, current efficient answer correctness (AC) metrics do not align with human judgments, particularly for verbose, free-form answers from large language models (LLMs). There are two challenges: a lack of diverse evaluation data, and models that are too big and non-transparent; LLM-based scorers correlate better with humans, but this expensive approach has only been tested on limited QA datasets. We rectify these issues by providing guidelines and datasets for evaluating machine QA, adapted from the human QA community. We also propose an efficient, low-resource, and interpretable QA evaluation method that is more stable than exact match and neural methods.
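The mismatch the abstract points to is easy to see with the classic string-based metrics; a minimal illustration (not PEDANTS itself) contrasting exact match with the more lenient SQuAD-style token-level F1:

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> int:
    # 1 only if the normalized strings are identical
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    # token-overlap F1: gives partial credit to verbose but correct answers
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

A free-form LLM answer like "the answer is Paris" scores 0 under exact match against the gold answer "Paris", while token F1 still awards partial credit (0.4 here), which is one reason exact match underrates verbose answers.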


Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation

Sclar, Melanie, West, Peter, Kumar, Sachin, Tsvetkov, Yulia, Choi, Yejin

arXiv.org Artificial Intelligence

We present Referee, a novel framework for sentence summarization that can be trained reference-free (i.e., requiring no gold summaries for supervision), while allowing direct control for compression ratio. Our work is the first to demonstrate that reference-free, controlled sentence summarization is feasible via the conceptual framework of Symbolic Knowledge Distillation (West et al., 2022), where latent knowledge in pre-trained language models is distilled via explicit examples sampled from the teacher models, further purified with three types of filters: length, fidelity, and Information Bottleneck. Moreover, we uniquely propose iterative distillation of knowledge, where student models from the previous iteration of distillation serve as teacher models in the next iteration. Starting off from a relatively modest set of GPT3-generated summaries, we demonstrate how iterative knowledge distillation can lead to considerably smaller, but better summarizers with sharper controllability. A useful by-product of this iterative distillation process is a high-quality dataset of sentence-summary pairs with varying degrees of compression ratios. Empirical results demonstrate that the final student models vastly outperform the much larger GPT3-Instruct model in terms of the controllability of compression ratios, without compromising the quality of resulting summarization.
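The iterative distillation loop described in the abstract can be sketched generically. Everything below is a structural placeholder, not Referee's implementation: the "teacher" is any sentence-to-summary callable, the length filter stands in for the paper's three filters (length, fidelity, Information Bottleneck), and `train_student` stands in for fine-tuning a smaller model on the purified pairs.

```python
def length_filter(pairs, max_ratio=0.6):
    # keep (sentence, summary) pairs whose compression ratio is at most max_ratio
    return [(s, y) for s, y in pairs
            if len(y.split()) / len(s.split()) <= max_ratio]

def train_student(pairs):
    # placeholder: a real implementation would fine-tune a smaller LM on the pairs;
    # here the "student" simply memorizes the filtered sentence-summary table
    table = dict(pairs)
    return lambda s: table.get(s, s)

def iterative_distill(teacher, corpus, n_iters=3):
    # each round: sample summaries from the current teacher, purify with filters,
    # train a student, then promote the student to teacher for the next round
    current = teacher
    for _ in range(n_iters):
        pairs = [(s, current(s)) for s in corpus]
        pairs = length_filter(pairs)
        current = train_student(pairs)
    return current
```

The point of the loop is the role swap at the end of each round: the student distilled in iteration *k* generates the training examples for iteration *k+1*, which is how the abstract's "iterative distillation of knowledge" differs from a single teacher-to-student pass.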